Bibliographic Record - Detail View
Author | Almond, Russell G. |
---|---|
Title | Using Automated Essay Scores as an Anchor When Equating Constructed Response Writing Tests |
Source | In: International Journal of Testing, 14 (2014) 1, pp. 73-91 (19 pages) |
Full text | PDF |
Language | English |
Document type | print; online; journal article |
ISSN | 1530-5058 |
DOI | 10.1080/15305058.2013.816309 |
Keywords | Automation; Equated Scores; Writing Tests; Essay Tests; Scoring; College Entrance Examinations; Graduate Record Examinations |
Abstract | Assessments consisting of only a few extended constructed response items (essays) are not typically equated using anchor test designs, as there are usually too few essay prompts in each form to allow for meaningful equating. This article explores the idea that output from an automated scoring program designed to measure writing fluency (a common objective of many writing prompts) can be used in place of a more traditional anchor. The linear-logistic equating method used in this article is a variant of the Tucker linear equating method appropriate for the limited score range typical of essays. The procedure is applied to historical data. Although the procedure yields only small improvements over identity equating (not equating prompts), it does provide a viable alternative, as well as a mechanism for checking that identity equating is appropriate. This may be particularly useful for measuring rater drift or equating mixed-format tests. (As Provided.) |
Notes | Routledge. Available from: Taylor & Francis, Ltd. 325 Chestnut Street Suite 800, Philadelphia, PA 19106. Tel: 800-354-1420; Fax: 215-625-2940; Web site: http://www.tandf.co.uk/journals |
Indexed by | ERIC (Education Resources Information Center), Washington, DC |
Last update | 2017/04/10 |
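The abstract describes a variant of the Tucker linear equating method with an automated writing-fluency score as the anchor. The article's own linear-logistic variant is not reproduced here; as a point of reference, the following is a minimal sketch of the *standard* Tucker linear method under a common-item nonequivalent-groups design, with illustrative variable names (group 1 takes form X plus anchor V, group 2 takes form Y plus anchor V). All names and the synthetic-population weight `w1` are assumptions for illustration, not taken from the paper.

```python
# Hedged sketch of standard Tucker linear equating (the article adapts a
# variant of this); group 1 sees form X + anchor V, group 2 sees form Y + V.
from statistics import mean, pvariance

def _cov(a, b):
    """Population covariance of two equal-length score lists."""
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

def tucker_linear_equating(x1, v1, y2, v2, w1=0.5):
    """Return (slope, intercept) mapping form-X scores onto the form-Y scale.

    w1 is the synthetic-population weight for group 1 (illustrative default).
    """
    w2 = 1.0 - w1
    g1 = _cov(x1, v1) / pvariance(v1)   # regression slope of X on V, group 1
    g2 = _cov(y2, v2) / pvariance(v2)   # regression slope of Y on V, group 2
    dv = mean(v1) - mean(v2)            # anchor mean difference across groups
    dvar = pvariance(v1) - pvariance(v2)
    # Synthetic-population moments for X and Y
    mu_x = mean(x1) - w2 * g1 * dv
    mu_y = mean(y2) + w1 * g2 * dv
    var_x = pvariance(x1) - w2 * g1**2 * dvar + w1 * w2 * g1**2 * dv**2
    var_y = pvariance(y2) + w1 * g2**2 * dvar + w1 * w2 * g2**2 * dv**2
    slope = (var_y / var_x) ** 0.5
    return slope, mu_y - slope * mu_x
```

When the two groups are statistically equivalent and the forms behave identically, the function returns slope 1 and intercept 0, i.e. identity equating, which is exactly the baseline the abstract says the proposed procedure is compared against.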